Current Virtual Reality (VR) environments lack the rich haptic signals that humans experience during real-life interactions, such as the sensation of texture during lateral movement on a surface. Adding realistic haptic textures to VR environments requires a model that generalizes to variations of a user's interaction and to the wide variety of existing textures in the world. Current methodologies for haptic texture rendering exist, but they usually develop one model per texture, resulting in low scalability. We present a deep learning-based, action-conditional model for haptic texture rendering and evaluate its perceptual performance in rendering realistic texture vibrations through a multi-part human user study. This model is unified over all materials and uses data from a vision-based tactile sensor (GelSight) to render the appropriate surface conditioned on the user's action in real time. For rendering texture, we use a high-bandwidth vibrotactile transducer attached to a 3D Systems Touch device. The results of our user study show that our learning-based method creates high-frequency texture renderings with comparable or better quality than state-of-the-art methods, without the need to learn a separate model per texture. Furthermore, we show that the method is capable of rendering previously unseen textures using a single GelSight image of their surface.
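As an illustration of the kind of action-conditional architecture described above, the sketch below (not the authors' implementation; the layer sizes, the two-element action vector, and the output window length are assumptions) encodes a single GelSight image into a texture embedding and decodes it, together with the user's normal force and sliding speed, into a short window of high-frequency vibration.

```python
# Minimal sketch of an action-conditional texture model: a GelSight image is
# encoded once into a texture embedding, which is combined with the user's
# instantaneous action (normal force, sliding speed) to predict the next
# window of vibration. All sizes are illustrative, not the authors' values.
import torch
import torch.nn as nn

class TextureVibrationModel(nn.Module):
    def __init__(self, embed_dim=64, action_dim=2, window=100):
        super().__init__()
        # Texture encoder: GelSight RGB image -> fixed-length embedding.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        # Action-conditional decoder: embedding + [force, speed] -> vibration window.
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim + action_dim, 128), nn.ReLU(),
            nn.Linear(128, window),
        )

    def forward(self, gelsight_image, action):
        texture = self.encoder(gelsight_image)                      # (B, embed_dim)
        return self.decoder(torch.cat([texture, action], dim=-1))  # (B, window)

# Example: one image of an unseen texture, one action sample.
model = TextureVibrationModel()
img = torch.rand(1, 3, 128, 128)      # placeholder GelSight image
act = torch.tensor([[1.5, 0.08]])     # [normal force (N), sliding speed (m/s)]
vibration = model(img, act)           # predicted vibration window
```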
Everting, soft-growing vine robots benefit from reduced friction with their environment, which allows them to navigate challenging terrain. Vine robots can use air pouches attached to their sides for lateral steering. However, when all pouches are serially connected, the whole robot can only form a single constant-curvature shape in free space; it must contact the environment to navigate through obstacles along paths with multiple turns. This work presents a multi-segment vine robot that can navigate complex paths without interacting with its environment. This is achieved by a new steering method that selectively actuates individual pouches at the tip, providing high degrees of freedom with few control inputs. A small magnetic valve connects each pouch to a pressure supply line. A motorized tip mount uses an interlocking mechanism and motorized rollers on the outer material of the vine robot. As each valve passes through the tip mount, a permanent magnet inside the tip mount opens it, connecting the corresponding pouch to the pressure supply line at that moment. Novel cylindrical pneumatic artificial muscles (cPAMs) are integrated into the vine robot and inflate to a cylindrical shape, yielding improved bending characteristics compared to other state-of-the-art vine robots. The motorized tip mount maintains a continuous eversion speed and enables controlled retraction. A final prototype was able to repeatably grow into different shapes and hold them. We predict the path using a model that assumes a piecewise constant curvature along the outside of the multi-segment vine robot, as sketched below. The proposed multi-segment steering method can be extended to other soft continuum robot designs.
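The sketch below is a minimal planar version of the piecewise-constant-curvature idea (not the authors' model, which tracks the outside of the robot body): each actuated segment is an arc defined by an assumed length and curvature, and segment frames are chained to give the predicted backbone path.

```python
# Piecewise-constant-curvature path prediction in the plane. Segment lengths
# and curvatures below are illustrative values, not measured robot parameters.
import numpy as np

def segment_points(L, kappa, n=20):
    """Points along one constant-curvature segment, in the segment's own frame."""
    s = np.linspace(0.0, L, n)
    if abs(kappa) < 1e-9:                       # straight segment
        return np.column_stack([s, np.zeros_like(s)]), 0.0
    x = np.sin(kappa * s) / kappa
    y = (1.0 - np.cos(kappa * s)) / kappa
    return np.column_stack([x, y]), kappa * L   # points and bending angle

def backbone(segments):
    """Chain segments (L, kappa) into a global 2D path starting at the origin."""
    pts, pose, theta = [np.zeros(2)], np.zeros(2), 0.0
    for L, kappa in segments:
        local, dtheta = segment_points(L, kappa)
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        pts.extend(pose + local @ R.T)          # rotate into the global frame
        pose, theta = pts[-1], theta + dtheta
    return np.array(pts)

# Example: three segments with different curvatures (units: meters, 1/m).
path = backbone([(0.3, 2.0), (0.2, -4.0), (0.25, 0.0)])
print(path[-1])  # predicted tip position
```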
Effective force modulation during tissue manipulation is important for ensuring safe robot-assisted minimally invasive surgery (RMIS). Strict requirements for in-vivo distal force sensing have led to prior sensor designs that trade off ease of manufacture and integration against force measurement accuracy along the tool axis. These limitations have made collecting high-quality 3-degree-of-freedom (3-DoF) bimanual force data in RMIS inaccessible to researchers. We present a modular and manufacturable 3-DoF force sensor that integrates easily with an existing RMIS tool. We achieve this by relaxing biocompatibility and sterilizability requirements while utilizing commercial load cells and common electromechanical fabrication techniques. The sensor has a range of ±5 N axially and ±3 N laterally, with average root-mean-square errors (RMSEs) below 0.15 N in all directions. During teleoperated mock tissue manipulation tasks, a pair of jaw-mounted sensors achieved average RMSEs below 0.15 N in all directions and an RMSE of 0.156 N for grip force. The sensor has sufficient accuracy within the range of forces found in delicate manipulation tasks, with potential use in bimanual haptic feedback and robotic force control. As an open-source design, the sensors can be adapted to suit additional robotic applications outside of RMIS.
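The accuracy figures above are root-mean-square errors against a reference measurement; the sketch below shows, on synthetic data, how raw load-cell readings could be mapped to 3-DoF forces with a linear least-squares calibration and how per-axis RMSEs are then computed. The data, the linear sensor model, and the channel counts are assumptions for illustration only.

```python
# Hypothetical calibration of a 3-DoF force sensor built from commercial load
# cells: fit a linear map from raw readings to forces, then report per-axis RMSE.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration data: N samples of raw readings (3 load cells plus a
# bias term) paired with ground-truth forces from a reference sensor.
N = 500
raw = rng.normal(size=(N, 3))
true_C = rng.normal(size=(4, 3))                      # unknown "true" mapping
X = np.hstack([raw, np.ones((N, 1))])                 # add bias column
F_true = X @ true_C + 0.05 * rng.normal(size=(N, 3))  # forces in x, y, z (N)

# Fit calibration matrix C so that X @ C approximates the reference forces.
C, *_ = np.linalg.lstsq(X, F_true, rcond=None)

# RMSE per axis, the metric used to report sensor accuracy (e.g., < 0.15 N).
F_est = X @ C
rmse = np.sqrt(np.mean((F_est - F_true) ** 2, axis=0))
print(dict(zip("xyz", np.round(rmse, 3))))
```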
Relocating haptic feedback from the fingertips to the wrist has been considered as a way to enable haptic interaction with mixed reality virtual environments while freeing the fingers for other tasks. We present a pair of wrist-worn tactile haptic devices and a virtual environment to study how various mappings between fingers and tactors affect task performance. The haptic feedback rendered at the wrist reflects the interactions occurring between a virtual object and virtual avatars controlled by the index finger and thumb. We conducted a user study comparing four different finger-to-tactor haptic feedback mappings and one no-feedback condition as a control. We evaluated users' ability to perform a simple pick-and-place task using the metrics of task completion time, path lengths of the fingers and of the virtual cube, and magnitudes of the normal and shear forces at the fingertips. We found that multiple mappings were effective, and that they had a greater impact when visual cues were limited. We discuss the limitations of our approach and describe next steps toward multi-degree-of-freedom haptic rendering with wrist-worn devices to improve task performance in virtual environments.
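As one concrete example of a finger-to-tactor mapping (not necessarily any of the four mappings compared in the study; the gains and displacement limits are made up), the sketch below scales the normal and shear forces at a virtual fingertip into pressing and dragging commands for the corresponding wrist-worn tactor.

```python
# Illustrative finger-to-tactor mapping: fingertip forces (N) become tactor
# displacement commands (mm). Gains and saturation limits are hypothetical.
def fingertip_to_tactor(normal_force, shear_force,
                        normal_gain=2.0, shear_gain=3.0, max_mm=3.0):
    """Map fingertip contact forces to pressing/dragging tactor displacements."""
    press = min(normal_gain * max(normal_force, 0.0), max_mm)
    drag = max(-max_mm, min(shear_gain * shear_force, max_mm))
    return press, drag

# One tactor per digit: index-finger interactions drive one wrist device,
# thumb interactions drive the other.
index_cmd = fingertip_to_tactor(normal_force=1.2, shear_force=0.4)
thumb_cmd = fingertip_to_tactor(normal_force=0.8, shear_force=-0.2)
print(index_cmd, thumb_cmd)
```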
Force estimation using neural networks is a promising approach to enable haptic feedback in minimally invasive surgical robots that lack end-effector force sensors. Various network architectures have been proposed, but none have been tested in real time with surgical-like manipulations. Thus, questions remain about the real-time transparency and stability of force feedback rendered from neural-network-based force estimates. We characterize the real-time impedance transparency and stability of force feedback rendered on a da Vinci Research Kit surgical robot using neural networks with vision-only, state-only, and state-and-vision inputs. The networks were trained on an existing dataset of teleoperated manipulations performed without force feedback. To measure the real-time stability and transparency of the force feedback delivered to the operator during teleoperation, we modeled a one-degree-of-freedom human and surgeon-side manipulandum that moved the patient-side robot to manipulate silicone under various robot and camera configurations and tools. We found that the networks using state inputs displayed more transparent impedance than the vision-only network. However, during lateral manipulation of the silicone, the state-based networks showed substantial instability when providing force feedback. In contrast, the vision-only network showed consistent stability in all directions evaluated. We confirmed the performance of the vision-only network for real-time force feedback in a demonstration with a human teleoperator.
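The sketch below gives a heavily simplified version of the one-degree-of-freedom analysis described above. The parameters, the sinusoidal hand input, and the assumption that the patient-side position mirrors the operator-side position are all illustrative, not the authors' model: the operator-side mass feels only the rendered force, which here is a possibly biased and delayed stiffness estimate, and transparency and stability are read off from the simulated signals.

```python
# Toy 1-DoF manipulandum simulation for probing force-feedback transparency
# (rendered force vs. true environment force) and stability (bounded motion).
import numpy as np

def simulate(k_est, delay_steps, T=5.0, dt=1e-3,
             m=0.5, b=2.0, k_env=300.0, amp=2.0, freq=1.0):
    n = int(T / dt)
    x, v = 0.0, 0.0
    xs = np.zeros(n)
    f_true = np.zeros(n)   # force the silicone actually applies (never felt directly)
    f_fb = np.zeros(n)     # force rendered to the operator from the estimate
    for i in range(n):
        xs[i] = x
        f_true[i] = -k_env * x
        j = max(i - delay_steps, 0)
        f_fb[i] = -k_est * xs[j]                          # biased/delayed estimate
        f_hand = amp * np.sin(2 * np.pi * freq * i * dt)  # operator input force
        a = (f_hand + f_fb[i] - b * v) / m                # operator feels only feedback
        v += a * dt
        x += v * dt
    transparency = np.max(np.abs(f_fb)) / np.max(np.abs(f_true))
    stable = np.max(np.abs(xs[n // 2:])) < 10 * np.max(np.abs(xs[:n // 2]))
    return transparency, stable

print(simulate(k_est=300.0, delay_steps=0))   # accurate, undelayed estimate
print(simulate(k_est=600.0, delay_steps=50))  # stiffer, delayed estimate -> unstable
```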
During robot-assisted surgery, knowledge of interaction forces can be used to provide haptic feedback to the human operator and to evaluate tissue handling skill. However, direct force sensing at the end effector is challenging because it requires biocompatible, sterilizable, and cost-effective sensors. Vision-based deep learning using convolutional neural networks is a promising approach for providing useful force estimates, although questions remain about generalization to new scenarios and about real-time inference. We present a force estimation neural network that uses RGB images and robot state as inputs. Using a self-collected dataset, we compared the network to variants that use only a single input type and evaluated how they generalize to new viewpoints, workspace positions, materials, and tools. We found that vision-based networks are sensitive to changes in viewpoint, while state-only networks are robust to changes in workspace. The network with both state and vision inputs had the highest accuracy for an unseen tool and was also robust to changes in viewpoint. Through a feature-removal study, we found that using position features as inputs yields better accuracy than using force features. The network with state and vision inputs outperformed a physics-based baseline model in accuracy. It showed comparable accuracy but faster computation time than a baseline recurrent neural network, making it better suited for real-time applications.
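A minimal sketch of the state-and-vision fusion idea follows (the layer sizes and the 14-dimensional state vector are assumptions, not the authors' architecture): an RGB image branch and a robot-state branch are concatenated and regressed to a 3-DoF force.

```python
# Illustrative force-estimation network fusing an RGB image with the robot
# state (e.g., joint positions and velocities) to predict a 3-DoF force.
import torch
import torch.nn as nn

class VisionStateForceNet(nn.Module):
    def __init__(self, state_dim=14):
        super().__init__()
        self.vision = nn.Sequential(                  # RGB image branch
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.state = nn.Sequential(                   # robot state branch
            nn.Linear(state_dim, 64), nn.ReLU(),
        )
        self.head = nn.Sequential(                    # fused regression head
            nn.Linear(32 + 64, 64), nn.ReLU(),
            nn.Linear(64, 3),                         # force in x, y, z
        )

    def forward(self, image, state):
        fused = torch.cat([self.vision(image), self.state(state)], dim=-1)
        return self.head(fused)

net = VisionStateForceNet()
force = net(torch.rand(1, 3, 224, 224), torch.rand(1, 14))
print(force.shape)  # torch.Size([1, 3])
```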
The lack of haptic feedback in robot-assisted minimally invasive surgery (RMIS) is a potential barrier to safe tissue handling during surgery. Bayesian modeling theory suggests that surgeons with experience in open or laparoscopic surgery can develop priors of tissue stiffness during RMIS, unlike surgeons with no such experience. To test whether prior haptic experience leads to improved force estimation ability in teleoperation, 33 participants were assigned to one of three training conditions: manual manipulation, teleoperation with force feedback, or teleoperation without force feedback, and learned to tension a silicone sample to a set of force values. They were then asked to perform the tensioning task, as well as a previously unencountered palpation task, to a different set of force values under teleoperation without force feedback. Compared to the teleoperation groups, the manual group had higher force error in the tensioning task outside the range of forces trained on, but showed better speed-accuracy functions in the palpation task at low force levels. This suggests that the dynamics of the training modality affect force estimation ability during teleoperation, with prior haptic experience accessible if formed under the same dynamics as the task.
Learning from demonstration (LfD) is a proven technique to teach robots new skills. Data quality and quantity play a critical role in the performance of LfD-trained models. In this paper we analyze the effect of enhancing an existing teleoperation data collection system with real-time haptic feedback; we observe improvements in the throughput of the collected data and in its quality for model training. Our experimental testbed was a mobile manipulator robot that opened doors with latch handles. Evaluation of teleoperated data collection on eight real-world conference room doors found that adding haptic feedback improved the data throughput by 6%. We additionally used the collected data to train six image-based deep imitation learning models, three with haptic feedback and three without it. These models were used to implement autonomous door opening with the same type of robot used during data collection. Our results show that a policy from a behavior cloning model trained with haptic data performed on average 11% better than its counterpart trained on data collected without haptic feedback, indicating that haptic feedback resulted in the collection of a higher-quality dataset.
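For concreteness, the sketch below shows the kind of real-time haptic feedback path that can be added to a teleoperation data collection system: the wrench measured at the manipulator's wrist is scaled, clipped, and sent to the operator's input device on every control cycle. The gains, limits, and wrench values are hypothetical and not taken from the paper's controller.

```python
# Illustrative haptic feedback mapping for teleoperated data collection:
# measured 6-DoF wrench [Fx, Fy, Fz, Tx, Ty, Tz] -> operator device command.
import numpy as np

def haptic_feedback_command(wrist_wrench, force_gain=0.3, torque_gain=0.2,
                            max_force=4.0, max_torque=0.5):
    """Scale and saturate a measured wrench for rendering on the input device."""
    wrench = np.asarray(wrist_wrench, dtype=float)
    cmd = np.concatenate([force_gain * wrench[:3], torque_gain * wrench[3:]])
    limits = np.array([max_force] * 3 + [max_torque] * 3)
    return np.clip(cmd, -limits, limits)   # keep feedback within device range

# Example control-loop tick: the operator feels a scaled version of the contact
# forces that arise while the latch handle is being turned.
print(haptic_feedback_command([1.0, -6.0, 20.0, 0.1, 0.0, -2.0]))
```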
Haptic interfaces on the wrist can provide a wide range of tactile cues to convey information and interactions with virtual objects. Unlike the fingertips, the wrist and forearm offer a considerable area of skin on which multiple haptic actuators can be placed as a display, enriching haptic information transfer with minimal encumbrance. Existing multi-degree-of-freedom (DoF) wrist-worn devices use conventional rigid robotic mechanisms and motors that limit their versatility, miniaturization, distribution, and assembly. Alternative solutions based on arrays of soft elastomeric actuators provide only 1-DoF haptic pixels. Higher-DoF prototypes produce a single interaction point and require complex manual assembly processes, such as molding and bonding of several parts. These approaches limit the construction of highly functional, compact haptic displays, as well as their repeatability and customizability. Here, we present a novel, fully 3D-printed, soft, wearable haptic display for increasing haptic information transfer on the wrist and forearm with 3-DoF haptic voxels, called hoxels. Our initial prototype consists of two hoxels that provide skin shear, pressure, twist, stretch, squeeze, and other arbitrary stimuli. Each hoxel generates forces of up to 1.6 N in the x- and y-axes and up to 20 N in the z-axis. Our method enables rapid fabrication of versatile and forceful haptic displays.
There is a gap between the signals provided by pacemakers (i.e., intracardiac electrograms (EGMs)) and the signals physicians use to diagnose abnormal rhythms (i.e., the 12-lead electrocardiogram (ECG)). Consequently, the former, even when transmitted remotely, is not sufficient for physicians to provide a precise diagnosis, let alone timely intervention. To close this gap and take a heuristic step toward real-time intervention in instant response to irregular and infrequent ventricular rhythms, we propose a new framework dubbed RT-RCG that automatically searches for (1) efficient deep neural network (DNN) structures and then (2) corresponding accelerators, to enable real-time, high-quality reconstruction of ECG signals from EGM signals. Specifically, RT-RCG proposes a new DNN search space tailored for ECG reconstruction from EGM signals and incorporates a differentiable acceleration search (DAS) engine to efficiently navigate the large, discrete accelerator design space and generate optimized accelerators. Extensive experiments and ablation studies under various settings consistently validate the effectiveness of RT-RCG. To the best of our knowledge, RT-RCG is the first to leverage neural architecture search (NAS) to simultaneously tackle both reconstruction efficacy and efficiency.
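To make the reconstruction task itself concrete, the sketch below shows one candidate network of the general kind such a search could cover (the channel counts, layer choices, and sampling rate are assumptions, not RT-RCG's searched architecture): a 1-D convolutional network mapping a window of intracardiac EGM channels to the corresponding 12-lead ECG.

```python
# Illustrative EGM-to-ECG reconstruction network: 1-D convolutions map a
# multi-channel EGM window to a 12-lead ECG window of the same length.
import torch
import torch.nn as nn

class EGMtoECG(nn.Module):
    def __init__(self, egm_channels=4, ecg_leads=12, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(egm_channels, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(hidden, ecg_leads, kernel_size=7, padding=3),
        )

    def forward(self, egm):          # egm: (batch, channels, samples)
        return self.net(egm)         # ecg: (batch, 12, samples)

model = EGMtoECG()
egm_window = torch.rand(1, 4, 1024)   # roughly one second of EGM (placeholder)
ecg_hat = model(egm_window)           # reconstructed 12-lead ECG window
print(ecg_hat.shape)                  # torch.Size([1, 12, 1024])

# In a NAS setting such as RT-RCG, the layer types and widths of a network like
# this (and its hardware accelerator) are searched jointly, with a reconstruction
# loss against the recorded ECG driving the network side of the search.
```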